1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPEW34M/43BHB8L
Repository: sid.inpe.br/sibgrapi/2020/09.30.23.54
Last Update: 2020:09.30.23.54.49 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2020/09.30.23.54.49
Metadata Last Update: 2022:06.10.19.41.23 (UTC) administrator
DOI: 10.1109/SIBGRAPI51738.2020.00010
Citation Key: CordeiroCarn:2020:HoTrYo
Title: A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?
Format: On-line
Year: 2020
Access Date: 2024, May 03
Number of Files: 1
Size: 494 KiB
2. Context
Author: 1 Cordeiro, Filipe Rolim; 2 Carneiro, Gustavo
Affiliation: 1 Universidade Federal Rural de Pernambuco; 2 University of Adelaide
Editor: Musse, Soraia Raupp; Cesar Junior, Roberto Marcondes; Pelechano, Nuria; Wang, Zhangyang (Atlas)
e-Mail Address: filipe.rolim@ufrpe.br
Conference Name: Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)
Conference Location: Porto de Galinhas (virtual)
Date: 7-10 Nov. 2020
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Tutorial
History (UTC): 2020-09-30 23:54:49 :: filipe.rolim@ufrpe.br -> administrator ::
2022-06-10 19:41:23 :: administrator -> filipe.rolim@ufrpe.br :: 2020
3. Content and structure
Is the master or a copy?: is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: noisy labels; deep learning
Abstract: Noisy labels are commonly present in data sets automatically collected from the internet, mislabeled by non-specialist annotators, or even by specialists in challenging tasks, such as in the medical field. Although deep learning models have shown significant improvements in different domains, an open issue is their ability to memorize noisy labels during training, which reduces their generalization potential. As deep learning models depend on correctly labeled data sets and label correctness is difficult to guarantee, it is crucial to account for the presence of noisy labels when training deep learning models. Several approaches have been proposed in the literature to improve the training of deep learning models in the presence of noisy labels. This paper presents a survey of the main techniques in the literature, in which we classify the algorithms into the following groups: robust losses, sample weighting, sample selection, meta-learning, and combined approaches. We also present the commonly used experimental setups, data sets, and results of the state-of-the-art models.
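To illustrate the first group in the abstract's taxonomy (robust losses), the sketch below implements one well-known example, the generalized cross-entropy loss of Zhang and Sabuncu (2018); it is an illustrative choice, not necessarily the specific formulation discussed in this survey, and is written with NumPy for a single batch of softmax outputs.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy: L_q = (1 - p_y^q) / q.

    Interpolates between standard cross-entropy (q -> 0) and the
    mean absolute error (q = 1). Larger q down-weights the gradient
    contributed by low-confidence (possibly mislabeled) samples,
    which is what makes the loss robust to noisy labels.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    # Probability the model assigned to each sample's (possibly noisy) label.
    p_y = probs[np.arange(len(labels)), labels]
    return np.mean((1.0 - p_y ** q) / q)
```

For example, a sample the model already fits well (`p_y = 0.99`) contributes a much smaller loss than an ambiguous one (`p_y = 0.5`), and with `q = 1` the loss reduces exactly to `mean(1 - p_y)`, the MAE form.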
Arrangement: urlib.net > SDLA > Fonds > SIBGRAPI 2020 > A Survey on...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content: agreement.html 30/09/2020 20:54 1.2 KiB
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPEW34M/43BHB8L
zipped data URL: http://urlib.net/zip/8JMKD3MGPEW34M/43BHB8L
Language: en
Target File: Tutorial_ID_4_SIBGRAPI_2020_camara_ready_v2 copy.pdf
User Group: filipe.rolim@ufrpe.br
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPEW34M/43G4L9S
Citing Item List: sid.inpe.br/sibgrapi/2020/10.28.20.46 4
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume
7. Description control
e-Mail (login): filipe.rolim@ufrpe.br